If the US Has to Build Data Centers, Here's Where They Should Go

WIRED

A new analysis tries to calculate the coming environmental footprint of AI in the US and finds that the ideal sites for data centers aren't where they're being built.

A data center for cryptocurrency mining, cloud services, and AI computing in Stutsman County, North Dakota.

Tech companies have invested so much money in building data centers in recent months that it's actively driving the US economy, and the AI race is showing no signs of slowing down. Meta chief Mark Zuckerberg told President Donald Trump last week that the company would spend $600 billion on US infrastructure, including data centers, by 2028, while OpenAI has already committed to spending $1.4 trillion.

An extensive new analysis looks at the environmental footprint of data centers in the US to get a handle on what, exactly, the country might be facing as this buildout continues over the next few years, and where the US should be building data centers to avoid the most harmful environmental impacts. The study, published in the journal Nature Communications on Monday, uses a variety of data, including demand for AI chips and information on state electricity and water scarcity, to project the potential environmental impacts of future data centers through the end of the decade. The study models a number of possible scenarios for how data centers could affect the US and the planet, and cautions that tech companies' net-zero promises aren't likely to hold up against the energy and water needs of the massive facilities they're building.


Predicting Delayed Trajectories Using Network Features: A Study on the Dutch Railway Network

Kampere, Merel, Alsahag, Ali Mohammed Mansoor

arXiv.org Artificial Intelligence

The Dutch railway network is one of the busiest in the world, and delays are a prominent concern for NS, the principal passenger railway operator. This research addresses a gap in delay-prediction studies within the Dutch railway network by employing an XGBoost classifier with a focus on topological features. Current research predominantly emphasizes short-term predictions and neglects the broader network-wide patterns essential for mitigating ripple effects. This research implements and improves an existing methodology, originally designed to forecast the evolution of the fast-changing US air network, to predict delays in the Dutch Railways. By integrating node centrality measures and comparing multiple classifiers such as RandomForest, DecisionTree, GradientBoosting, AdaBoost, and LogisticRegression, the goal is to predict delayed trajectories. However, the results reveal limited performance, especially in non-simultaneous testing scenarios, suggesting the need for more context-specific adaptations. Regardless, this research contributes to the understanding of transportation-network evaluation and proposes future directions for developing more robust predictive models for delays.
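The node centrality measures the abstract mentions can be illustrated with a toy example. The sketch below is not the paper's code: the graph, the station names, and the choice of measures are invented for illustration. It computes degree and closeness centrality for a small rail network, the kind of topological features a delay classifier could consume.

```python
from collections import deque

# Toy undirected rail network: adjacency lists keyed by (invented) station names.
rail = {
    "Amsterdam": ["Utrecht", "Schiphol"],
    "Utrecht": ["Amsterdam", "Eindhoven", "Arnhem"],
    "Schiphol": ["Amsterdam"],
    "Eindhoven": ["Utrecht"],
    "Arnhem": ["Utrecht"],
}

def degree_centrality(graph):
    """Degree of each node, normalised by the maximum possible degree (n - 1)."""
    n = len(graph) - 1
    return {node: len(nbrs) / n for node, nbrs in graph.items()}

def closeness_centrality(graph, source):
    """(n - 1) / sum of BFS shortest-path distances from `source`."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

deg = degree_centrality(rail)
print(deg["Utrecht"])                      # 0.75 — the hub has the highest degree
print(closeness_centrality(rail, "Utrecht"))  # 0.8
```

In a setup like the paper's, per-station features such as these would be joined to delay labels and fed to the classifiers being compared.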


Ban on AI Regulations in Trump's Tax Bill Carries a Huge Environmental Cost

Mother Jones

A data center for cryptocurrency mining, cloud services, and AI computing in Stutsman County, North Dakota. This story was originally published by the Guardian and is reproduced here as part of the Climate Desk collaboration. Republicans are pushing to pass a major spending bill that includes provisions to prevent states from enacting regulations on artificial intelligence. Such untamed growth in AI will take a heavy toll on the world's dangerously overheating climate, experts have warned. About 1 billion tons of planet-heating carbon dioxide are set to be emitted in the US from AI alone over the next decade if no restraints are placed on the industry's enormous electricity consumption, according to estimates by researchers at Harvard University provided to the Guardian.


Language hooks: a modular framework for augmenting LLM reasoning that decouples tool usage from the model and its prompt

de Mijolla, Damien, Yang, Wen, Duckett, Philippa, Frye, Christopher, Worrall, Mark

arXiv.org Artificial Intelligence

Prompting and fine-tuning have emerged as two competing paradigms for augmenting language models with new capabilities, such as the use of tools. Prompting approaches are quick to set up but rely on providing explicit demonstrations of each tool's usage in the model's prompt, thus coupling tool use to the task at hand and limiting generalisation. Fine-tuning removes the need for task-specific demonstrations of tool usage at runtime; however, it ties new capabilities to a single model, thus making already-heavier setup costs a recurring expense. In this paper, we introduce language hooks, a novel framework for augmenting language models with new capabilities that is decoupled both from the model's task-specific prompt and from the model itself. The language hook algorithm interleaves text generation by the base model with the execution of modular programs that trigger conditionally based on the existing text and the available capabilities. Upon triggering, programs may call external tools, invoke auxiliary language models (e.g. using tool-specific prompts), and modify the existing context. We benchmark our method against state-of-the-art baselines, find that it outperforms task-aware approaches, and demonstrate its ability to generalise to novel tasks.
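The interleaving described in the abstract, generate a chunk of text, let any hook whose trigger condition matches rewrite the context, then continue generating, can be sketched as below. Everything here is an invented illustration rather than the paper's actual interface: the `[calc: ...]` marker, the `calculator_hook`, and the canned `generate_step` stand-in for a real base model.

```python
import re

def generate_step(context):
    # Stand-in for one generation step of a base LLM: emits canned
    # continuations keyed on the current context so the loop is runnable.
    canned = {
        "": "The answer is [calc: 6*7]",
        "The answer is 42": " exactly.",
    }
    return canned.get(context, "")

def calculator_hook(context):
    """Hypothetical hook: fires when a [calc: ...] marker appears in the text."""
    m = re.search(r"\[calc: ([0-9*+\- /]+)\]", context)
    if not m:
        return None  # trigger condition not met; hook stays dormant
    # Toy "external tool" call; a real hook would use a safe evaluator or API.
    result = eval(m.group(1))
    return context.replace(m.group(0), str(result))  # modify the context

def run_with_hooks(hooks, max_steps=5):
    """Interleave base-model generation with conditionally triggered hooks."""
    context = ""
    for _ in range(max_steps):
        chunk = generate_step(context)
        if not chunk:
            break
        context += chunk
        for hook in hooks:
            updated = hook(context)
            if updated is not None:
                context = updated
    return context

print(run_with_hooks([calculator_hook]))  # "The answer is 42 exactly."
```

The point of the design is visible even in this toy: the hook knows nothing about the task prompt, and the generator knows nothing about the hook, so either can be swapped independently.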